
    Liquidity risk and the performance of UK mutual funds

    We examine, for the first time, the role of liquidity risk in UK mutual fund performance, both as a stock characteristic and as systematic liquidity risk. Using four alternative measures of stock liquidity, we extract principal components across stocks to construct systematic, or market, liquidity factors. We find that on average UK mutual funds are tilted towards liquid stocks (except for small-stock funds, as might be expected) but that, counter-intuitively, liquidity as a stock characteristic is positively priced in the cross-section of fund performance. Systematic liquidity risk is also positively priced in the cross-section of fund performance. Overall, our results reveal a strong role for both the level of stock liquidity and systematic liquidity risk in fund performance evaluation models.
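
    The factor-construction step can be illustrated with a minimal sketch (hypothetical data and variable names; the paper's exact estimation choices are not reproduced here): standardise a panel of stock-level liquidity measures and take the leading principal components as market-wide liquidity factors.

        # Sketch: build systematic liquidity factors from a stock-level liquidity panel.
        # Assumes a T x N matrix `liquidity` (days x stocks) for one liquidity measure;
        # the data and dimensions below are placeholders, not the paper's sample.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        liquidity = rng.normal(size=(250, 100))          # placeholder daily liquidity panel

        standardised = (liquidity - liquidity.mean(axis=0)) / liquidity.std(axis=0)
        pca = PCA(n_components=3)
        factors = pca.fit_transform(standardised)        # T x 3 systematic liquidity factors
        print(pca.explained_variance_ratio_)             # share of common variation captured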

    Modelling Grocery Retail Topic Distributions: Evaluation, Interpretability and Stability

    Understanding the shopping motivations behind market baskets has high commercial value in the grocery retail industry. Analysing shopping transactions demands techniques that can cope with the volume and dimensionality of grocery transactional data while keeping outcomes interpretable. Latent Dirichlet Allocation (LDA) provides a suitable framework to process grocery transactions and to discover a broad representation of customers' shopping motivations. However, summarising the posterior distribution of an LDA model is challenging, as individual LDA draws may not be coherent and cannot capture topic uncertainty. Moreover, the evaluation of LDA models is dominated by model-fit measures, which may not adequately capture qualitative aspects such as the interpretability and stability of topics. In this paper, we introduce a clustering methodology that post-processes posterior LDA draws to summarise the entire posterior distribution and to identify semantic modes represented as recurrent topics. Our approach is an alternative to standard label-switching techniques and provides a single posterior summary set of topics, as well as associated measures of uncertainty. Furthermore, we establish a more holistic definition of model evaluation, which assesses topic models based not only on their likelihood but also on their coherence, distinctiveness and stability. By means of a survey, we set thresholds for the interpretation of topic coherence and topic similarity in the domain of grocery retail data. We demonstrate that selecting recurrent topics through our clustering methodology not only improves model likelihood but also improves the qualitative aspects of LDA, such as interpretability and stability. We illustrate our methods on an example from a large UK supermarket chain.
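
    A minimal sketch of the post-processing idea (synthetic data and hypothetical settings; not the authors' implementation): fit LDA several times, pool the topic-word distributions from all draws, and cluster similar topics so that each cluster becomes one recurrent topic.

        # Sketch: pool topics from repeated LDA fits and cluster them into recurrent topics.
        # The corpus, number of topics and distance threshold are hypothetical choices.
        import numpy as np
        from sklearn.decomposition import LatentDirichletAllocation
        from sklearn.cluster import AgglomerativeClustering

        rng = np.random.default_rng(1)
        counts = rng.poisson(0.3, size=(500, 200))       # placeholder basket-by-product counts

        topic_draws = []
        for seed in range(5):                            # several posterior draws / restarts
            lda = LatentDirichletAllocation(n_components=10, random_state=seed).fit(counts)
            topics = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
            topic_draws.append(topics)
        pooled = np.vstack(topic_draws)                  # 5*10 topic-word distributions

        clusters = AgglomerativeClustering(n_clusters=None, distance_threshold=0.5,
                                           metric="cosine", linkage="average").fit(pooled)
        # Clusters whose topics recur across most draws correspond to "recurrent" topics.
        print(np.bincount(clusters.labels_))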

    Liquidity commonality and pricing in UK equities

    We investigate the pricing of systematic liquidity risk in UK equities using a large sample of daily data. Employing four alternative measures of liquidity, we first find strong evidence of commonality in liquidity across stocks. We apply asymptotic principal component analysis (PCA) to the sample of stocks to extract market, or systematic, liquidity factors. Previous research on systematic liquidity risk estimated using PCA is focused on the US, which has very different market structures from the UK. Our pricing results indicate that systematic liquidity risk is positively priced in the cross-section of stocks, specifically for the quoted spread liquidity measure. These findings on the pricing of systematic liquidity risk are not affected by the level of individual stock liquidity as a risk characteristic. However, counter-intuitively, we find that the latter is negatively priced in the cross-section of stocks, confirming earlier research.
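
    A hedged sketch of a cross-sectional pricing test of this kind (simulated inputs; not the paper's exact specification): estimate each stock's liquidity beta in the time series, then regress average returns on those betas and on the stock-level liquidity characteristic.

        # Sketch: two-pass test of whether systematic liquidity risk is priced.
        # Returns, the liquidity factor and the characteristic are simulated placeholders.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        T, N = 250, 100
        liq_factor = rng.normal(size=T)                      # market liquidity factor (e.g. from PCA)
        returns = 0.5 * np.outer(liq_factor, rng.normal(size=N)) + rng.normal(size=(T, N))
        liq_char = rng.normal(size=N)                        # stock-level liquidity characteristic

        # Pass 1: time-series liquidity beta per stock.
        X = sm.add_constant(liq_factor)
        betas = np.array([sm.OLS(returns[:, i], X).fit().params[1] for i in range(N)])

        # Pass 2: cross-sectional regression of average returns on beta and characteristic.
        X2 = sm.add_constant(np.column_stack([betas, liq_char]))
        print(sm.OLS(returns.mean(axis=0), X2).fit().summary())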

    The asset pricing effects of UK market liquidity shocks: evidence from tick data

    Using tick data covering a 12-year period that includes much of the recent financial crisis, we provide an unprecedented examination of the relationship between liquidity and stock returns in the UK market. Previous research on liquidity using high-frequency data omits the recent financial crisis and is focused on the US, which has a different market structure from the UK. We first construct several microstructure liquidity measures for FTSE All Share stocks, demonstrating that tick data reveal patterns in intra-day liquidity not observable with lower-frequency daily data. Our asymptotic principal component analysis captures commonality in liquidity across stocks to construct systematic market liquidity factors. We find that cross-sectional differences in returns exist across portfolios sorted by liquidity risk. These are strongly robust to market, size and value risk. The inclusion of a momentum factor partially explains some of the liquidity premia, but they remain statistically significant. However, during the crisis period a long liquidity risk strategy experiences significantly negative alphas.
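
    The portfolio-sort step can be sketched as follows (simulated series; the paper's factor set and estimation windows are not reproduced): sort stocks by estimated liquidity beta and regress the long-short portfolio's returns on a set of risk factors to obtain its alpha.

        # Sketch: long-short liquidity-risk portfolio and its multi-factor alpha.
        # All series below are simulated placeholders, not the paper's data.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        T, N = 250, 100
        liq_betas = rng.normal(size=N)                       # pre-estimated liquidity betas
        returns = rng.normal(size=(T, N)) + 0.02 * liq_betas # cross-sectional spread in mean returns

        order = np.argsort(liq_betas)
        low, high = order[: N // 5], order[-(N // 5):]       # bottom and top beta quintiles
        long_short = returns[:, high].mean(axis=1) - returns[:, low].mean(axis=1)

        factors = rng.normal(size=(T, 4))                    # market, size, value, momentum proxies
        fit = sm.OLS(long_short, sm.add_constant(factors)).fit()
        print("alpha:", fit.params[0], "t-stat:", fit.tvalues[0])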

    Posterior summaries of grocery retail topic models: Evaluation, interpretability and credibility

    Understanding the shopping motivations behind market baskets has significant commercial value for the grocery retail industry. The analysis of shopping transactions demands techniques that can cope with the volume and dimensionality of grocery transactional data while delivering interpretable outcomes. Latent Dirichlet allocation (LDA) allows the processing of grocery transactions and the discovery of customer behaviours. Interpretations of topic models typically exploit individual samples, overlooking the uncertainty of single topics. Moreover, training LDA multiple times shows topics with large uncertainty; that is, topics (dis)appear in some but not all posterior samples, as various authors in the field have also noted. In response, we introduce a clustering methodology that post-processes posterior LDA draws to summarise topic distributions represented as recurrent topics. Our approach identifies clusters of topics that belong to different samples and provides associated measures of uncertainty for each group. The proposed methodology allows the identification of an unconstrained number of customer behaviours presented as recurrent topics. We also establish a more holistic framework for model evaluation, which assesses topic models based not only on their predictive likelihood but also on quality aspects such as the coherence and distinctiveness of single topics and the credibility of a set of topics. Using the outcomes of a tailored survey, we set thresholds that aid in interpreting quality aspects in grocery retail data. We demonstrate that selecting recurrent topics not only improves predictive likelihood but also improves interpretability and credibility. We illustrate our methods with an example from a large British supermarket chain.
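
    As an illustration of the kind of topic-quality measure used alongside predictive likelihood, the sketch below computes a simple co-occurrence (UMass-style) coherence score for a topic's top words. The corpus and word indices are synthetic placeholders, and this is only one of several possible coherence definitions, not necessarily the one used in the paper.

        # Sketch: a simple co-occurrence coherence score for a topic's top words.
        # The document-term matrix and chosen word ids are hypothetical.
        import numpy as np
        from itertools import combinations

        def topic_coherence(top_word_ids, doc_word_binary, eps=1e-12):
            """UMass-style coherence: summed log co-occurrence of top word pairs."""
            score = 0.0
            for w1, w2 in combinations(top_word_ids, 2):
                co = np.sum(doc_word_binary[:, w1] & doc_word_binary[:, w2])
                single = np.sum(doc_word_binary[:, w2])
                score += np.log((co + 1.0) / (single + eps))
            return score

        rng = np.random.default_rng(4)
        docs = rng.random((500, 200)) < 0.05                 # placeholder binary doc-term matrix
        print(topic_coherence([3, 17, 42, 96, 120], docs))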

    Regional Topics in British Grocery Retail Transactions

    Understanding the customer behaviours behind transactional data has high commercial value in the grocery retail industry. Customers generate millions of transactions every day, choosing and buying products to satisfy specific shopping needs. Product availability may vary geographically due to local demand and local supply, which drives the importance of analysing transactions within their corresponding store and regional context. Topic models provide a powerful tool for the analysis of transactional data, identifying topics that display frequently-bought-together products and summarising transactions as mixtures of topics. We use the Segmented Topic Model (STM) to capture customer behaviours that are nested within stores. STM not only provides topics and transaction summaries but also topical summaries at the store level that can be used to identify regional topics. We summarise the posterior distribution of STM by post-processing multiple posterior samples and selecting semantic modes represented as recurrent topics. We use linear Gaussian process regression to model topic prevalence across British territory while accounting for spatial autocorrelation. We apply our methods to a dataset of transactional data from a major UK grocery retailer and demonstrate that shopping behaviours may vary regionally and that nearby stores tend to exhibit similar regional demand.
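
    The spatial step can be sketched with a standard Gaussian process regression (simulated coordinates and prevalences; the RBF-plus-noise kernel is an assumption, not the authors' exact specification): regress store-level topic prevalence on store location so that nearby stores receive similar smoothed estimates.

        # Sketch: Gaussian process regression of store-level topic prevalence on location,
        # capturing spatial autocorrelation. All inputs are simulated placeholders.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(5)
        store_coords = rng.uniform(0, 10, size=(150, 2))     # placeholder easting/northing
        prevalence = np.sin(store_coords[:, 0]) + 0.1 * rng.normal(size=150)

        kernel = RBF(length_scale=2.0) + WhiteKernel(noise_level=0.1)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(store_coords, prevalence)

        grid = np.column_stack([np.linspace(0, 10, 25), np.full(25, 5.0)])
        mean, std = gp.predict(grid, return_std=True)        # smoothed regional prevalence
        print(mean[:5], std[:5])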

    godash 2.0 - the next evolution of HAS evaluation

    In this short demo paper, we introduce godash 2.0. godash is a headless HTTP adaptive streaming (HAS) video streaming platform written in Google's Go programming language. godash has been extensively rewritten for this release to provide ease of use and a host of new features. godash includes options for eight different state-of-the-art adaptive algorithms, five HAS profiles, four video codecs, the ability to stream audio and video segments, two transport protocols, real-time output from five Quality of Experience (QoE) models, as well as a collaborative framework for the evaluation of cooperative HAS streaming. godash also comes complete with its own testbed framework, known as godashbed. godashbed uses a virtual environment to serve video content locally (which allows setting security certificates) through the Mininet virtual emulation tool. godashbed has options for large-scale evaluation of HAS streaming using 4G/5G bandwidth traces, various modes of background traffic, and a choice of web server, namely the Web Server Gateway Interface (WSGI) and the Asynchronous Server Gateway Interface (ASGI). In this manner, godash provides a framework for rapid deployment and testing of new HAS algorithms, QoE models and transport protocols.
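
    As a generic illustration of the kind of adaptive algorithm such a platform evaluates (this is not godash code and does not use its API), a minimal throughput-based rate-selection rule can be written as follows: estimate recent throughput, apply a safety margin, and pick the highest representation that fits.

        # Sketch: a minimal throughput-based HAS rate-adaptation rule, shown only to
        # illustrate the per-segment decision an adaptive algorithm makes.

        def choose_bitrate(available_bitrates_kbps, recent_throughputs_kbps, safety=0.8):
            """Pick the highest bitrate below a safety margin of recent mean throughput."""
            estimate = safety * (sum(recent_throughputs_kbps) / len(recent_throughputs_kbps))
            feasible = [b for b in sorted(available_bitrates_kbps) if b <= estimate]
            return feasible[-1] if feasible else min(available_bitrates_kbps)

        # Example: pick the next segment's bitrate from measured download throughputs.
        print(choose_bitrate([235, 750, 1500, 3000, 6000], [2600, 2900, 3100]))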

    Serum amyloid protein is associated with outcome following acute ischaemic stroke: data from the REmote ischaemic Conditioning After Stroke Trial (RECAST)

    Background: Remote ischaemic per-conditioning (RIC) in experimental ischaemic stroke is neuroprotective. Several neurohumoral, vascular and inflammatory mediators are implicated. Methods: The REmote ischaemic Conditioning After Stroke Trial (RECAST) was a pilot blinded sham-controlled trial in patients with ischaemic stroke, randomised to receive four 5-minute cycles of RIC within 24 hours of ictus. Plasma taken pre-intervention, immediately post-intervention and on day 4 was analysed for nitric oxide (nitrate/nitrite) levels using chemiluminescence; other biomarkers were analysed by enzyme-linked immunosorbent assay (ELISA): alpha-2-macroglobulin (A2M), serum amyloid protein (SAP), E-selectin and vascular endothelial growth factor (VEGF). Biomarkers were correlated with outcome (day 90 National Institutes of Health Stroke Scale [NIHSS], modified Rankin scale [mRS], Barthel index [BI]) using Pearson’s correlation coefficient. Results: In all 26 patients, an increase in SAP (pre- to post-intervention) correlated positively with worse day 90 mRS (r=0.429, p=0.029) and negatively with BI (r=-0.392, p=0.048), whilst an increase in SAP from day 0 to day 4 correlated positively with worse day 90 NIHSS (r=0.400, p=0.043) and mRS (r=0.505, p=0.008) and negatively with BI (r=-0.439, p=0.025). RIC reduced SAP levels from pre- to post-intervention (n=13, two-way ANOVA, p<0.05), whilst sham did not. No significant changes over time or by treatment, or correlations with outcome, were seen for A2M, E-selectin, nitric oxide or VEGF. Conclusion: Increased plasma levels of SAP are associated with worse clinical outcomes after ischaemic stroke. RIC reduced SAP levels from pre- to post-intervention. Larger studies assessing biomarkers and the efficacy of RIC in acute ischaemic stroke are warranted.
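
    The correlation analysis reported here can be illustrated with a small sketch (entirely synthetic values; not RECAST trial data): compute the change in SAP per patient and correlate it with the day-90 outcome score.

        # Sketch: correlating a biomarker change with an outcome score using Pearson's r.
        # All values below are synthetic placeholders, not trial data.
        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(6)
        n = 26
        sap_change = rng.normal(size=n)                                    # pre- to post-intervention change in SAP
        day90_mrs = 2 + 0.8 * sap_change + rng.normal(scale=0.5, size=n)   # synthetic outcome score

        r, p = pearsonr(sap_change, day90_mrs)
        print(f"r={r:.3f}, p={p:.3f}")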

    Social media in health science education: an international survey

    Background: Social media is an asset that higher education students can use for an array of purposes. Studies have shown the merits of social media use in educational settings; however, its adoption in health science education has been slow, and the contributing reasons remain unclear. Objective: This multidisciplinary study aimed to examine health science students’ opinions on the use of social media in health science education and to identify factors that may discourage its use. Methods: Data were collected from the Universitas 21 “Use of social media in health education” survey, distributed electronically among health science staff and students from 8 universities in 7 countries. The 1640 student respondents were grouped as users or nonusers based on their reported frequency of social media use in their education. Results: Of the 1640 respondents, 1343 (81.89%) use social media in their education. Only 462 of 1320 (35.00%) respondents have received specific social media training, and of those who have not, the majority (64.9%, 608/936) would like the opportunity. Users and nonusers reported the same 3 factors as the top barriers to their use of social media: uncertainty about policies, concerns about professionalism, and lack of support from the department. Nonusers reported all the barriers more frequently, and almost half of nonusers reported not knowing how to incorporate social media into their learning. Among users, more than one fifth (20.5%, 50/243) of students who use social media “almost always” reported sharing clinical images without explicit permission. Conclusions: Our global, interdisciplinary study demonstrates that a significant number of students across all health science disciplines self-reported sharing clinical images inappropriately, highlighting the need for policies and training specific to social media use in health science education.

    Fisheries long term monitoring program : summary of tailor (Pomatomus saltatrix) survey results: 1999-2004

    Tailor (Pomatomus saltatrix) is a schooling species with a world-wide distribution in subtropical waters that inhabits the coastal waters of southern Australia (Williams 2002). Its distribution in Australian waters ranges from the northern tip of Fraser Island in Queensland to Onslow in Western Australia (Kailola et al. 1993). Queensland commercial and recreational fishers target these schools on ocean beaches between Fraser Island and the New South Wales border during their annual spawning migration between late winter and spring (Leigh and O’Neill 2004). The estimated harvest of tailor is 155 t by the commercial sector (2004–05) and between 450 and 540 t by recreational fishers (2002). The Queensland tailor fishery is managed by the Department of Primary Industries and Fisheries under the Fisheries Regulation 1995. The current management arrangements include spatial and seasonal closures, a minimum legal size limit, limited commercial entry, an annual commercial quota and a recreational possession limit. The Long Term Monitoring Program (LTMP) monitors the tailor stock by investigating the length, weight, sex and age of commercially and recreationally caught tailor from the ocean beach sector. This report presents a summary of the data collected from 1999 to 2004. Since 1999, the LTMP has collected 14 486 tailor, with over half of those fish collected from zones not included in the seasonal closures. The modal length of tailor was between 300 and 370 mm for all years, sexes and regions. There was a significant relationship between the length and weight of tailor, yet no difference between sexes or regions. The majority of tailor collected were aged one or two years, with very few fish aged three or older. The growth of tailor was similar for both sexes and all regions. The majority of the length and age frequency data are representative of the recreational ocean beach fishery on Fraser Island, which is only part of the fishery. It is therefore suggested that monitoring of commercial and recreational catch samples be extended to other regions of the sampling area. Limited samples were collected of tailor larger than 500 mm or aged three years or older; any extension of the program should also focus on acquiring larger fish to help complete the tailor growth curve.
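
    The length-weight relationship mentioned here is conventionally modelled as a power law, W = aL^b. The sketch below fits that standard form by log-log least squares; the lengths and weights are synthetic, not survey data, and the exact model used in the monitoring program may differ.

        # Sketch: fitting the standard length-weight power law W = a * L**b
        # by log-log least squares. Data below are synthetic placeholders.
        import numpy as np

        rng = np.random.default_rng(7)
        length_mm = rng.uniform(250, 500, size=200)
        weight_g = 1e-5 * length_mm**3.0 * np.exp(rng.normal(scale=0.05, size=200))

        slope, intercept = np.polyfit(np.log(length_mm), np.log(weight_g), 1)
        print(f"a={np.exp(intercept):.2e}, b={slope:.2f}")   # b near 3 indicates isometric growth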